As complex machine learning models are increasingly deployed in real-world applications, explaining model predictions has become essential. However, these models are typically black-box deep neural networks, explained post-hoc via methods with known faithfulness limitations. Generalized Additive Models (GAMs) are an inherently interpretable class of models that address this limitation by learning a non-linear shape function for each feature separately, followed by a linear model on top. However, these models are typically difficult to train, require numerous parameters, and are difficult to scale. We propose an entirely new subfamily of GAMs that utilizes basis decomposition of shape functions. A small number of basis functions are shared among all features and learned jointly for a given task, thus making our models scale substantially better to large-scale data with high-dimensional features, especially when features are sparse. We propose an architecture denoted as the Neural Basis Model (NBM), which uses a single neural network to learn these bases. On a variety of tabular and image datasets, we demonstrate that for interpretable machine learning, NBMs are the state-of-the-art in accuracy, model size, and throughput, and can easily model all higher-order feature interactions. Source code is available at https://github.com/facebookresearch/nbm-pam.
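To make the basis-decomposition idea concrete, here is a minimal PyTorch sketch of an NBM-style model: a single shared MLP maps each scalar feature to B basis values, and each feature combines those bases with its own learned coefficients. The layer sizes, number of bases, and initialization are illustrative assumptions, not the reference implementation from the repository above.

```python
import torch
import torch.nn as nn

class NeuralBasisModel(nn.Module):
    """Sketch of a Neural Basis Model: one shared basis network, applied
    independently to every scalar feature, plus per-feature combination
    weights; the per-feature outputs are summed additively."""

    def __init__(self, num_features: int, num_bases: int = 64, hidden: int = 128):
        super().__init__()
        # Shared basis network (assumed architecture), shared by all features.
        self.basis_net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_bases),
        )
        # Per-feature linear combination of the shared bases.
        self.coefs = nn.Parameter(torch.randn(num_features, num_bases) * 0.01)
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features); feed each feature separately to the bases.
        b = self.basis_net(x.unsqueeze(-1))      # (batch, F, B)
        shape_fns = (b * self.coefs).sum(-1)     # f_i(x_i): (batch, F)
        return shape_fns.sum(-1) + self.bias     # additive model output

model = NeuralBasisModel(num_features=10)
print(model(torch.randn(4, 10)).shape)  # torch.Size([4])
```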
Generalized Additive Models (GAMs) have quickly become the leading choice for fully-interpretable machine learning. However, unlike uninterpretable methods such as DNNs, they lack expressive power and easy scalability, and are hence not a feasible alternative for real-world tasks. We present a new class of GAMs that use tensor rank decompositions of polynomials to learn powerful, fully-interpretable models. Our approach, titled Scalable Polynomial Additive Models (SPAM), is effortlessly scalable and models all higher-order feature interactions without a combinatorial parameter explosion. SPAM outperforms all current interpretable approaches and matches DNN/XGBoost performance on a series of real-world benchmarks with up to hundreds of thousands of features. We demonstrate through human-subject evaluations that SPAMs are markedly more interpretable in practice, and are hence an effortless replacement for DNNs for creating interpretable and high-performance systems suitable for large-scale machine learning. Source code is available at https://github.com/facebookresearch/nbm-pam.
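To illustrate how a rank decomposition avoids the combinatorial explosion, the sketch below implements a degree-2 polynomial model whose F x F quadratic interaction matrix is never materialized. The symmetric rank-R factorization and all hyperparameters are assumptions for illustration, not the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class DegreeTwoSPAM(nn.Module):
    """Sketch of a degree-2 polynomial additive model. The quadratic term
    is represented by a rank-R symmetric decomposition
    W2 = sum_r lam_r * u_r u_r^T, so all pairwise interactions cost
    O(F * R) parameters instead of O(F^2)."""

    def __init__(self, num_features: int, rank: int = 16):
        super().__init__()
        self.w1 = nn.Parameter(torch.zeros(num_features))          # linear term
        self.u = nn.Parameter(torch.randn(rank, num_features) * 0.01)
        self.lam = nn.Parameter(torch.ones(rank))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        linear = x @ self.w1
        proj = x @ self.u.t()                    # (batch, R): u_r . x
        quad = (self.lam * proj.pow(2)).sum(-1)  # sum_r lam_r (u_r . x)^2
        return self.bias + linear + quad

model = DegreeTwoSPAM(num_features=1000)
print(model(torch.randn(8, 1000)).shape)  # torch.Size([8])
```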
Domain generalization involves learning a classifier from a heterogeneous collection of training sources such that it generalizes to data drawn from similar unknown target domains, with applications in large-scale learning and personalized inference. In many settings, privacy concerns prohibit obtaining domain labels for the training data samples, and only aggregated collections of training points are available. Existing approaches that utilize domain labels to create domain-invariant feature representations are inapplicable in this setting, requiring alternative approaches to learn generalizable classifiers. In this paper, we present a domain-adaptive approach to this problem, which operates in two steps: (a) we cluster the training data within a carefully chosen feature space to create pseudo-domains, and (b) using these pseudo-domains, we learn a domain-adaptive classifier that makes predictions using information about both the input and the pseudo-domain it belongs to. Our approach achieves state-of-the-art performance on a variety of domain generalization benchmarks without using domain labels. Furthermore, we provide novel theoretical guarantees on domain generalization using cluster information. Our approach is amenable to ensemble-based methods and provides substantial gains even on large-scale benchmark datasets. Code can be found at: https://github.com/xavierohan/adaclust_domainbed
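The two-step recipe can be sketched with off-the-shelf clustering. In the sketch below, the feature space (random placeholders here), the number of clusters, and the one-hot conditioning scheme are all illustrative assumptions rather than the paper's exact design.

```python
import numpy as np
from sklearn.cluster import KMeans

# Step (a): cluster training points in a (frozen) feature space to obtain
# pseudo-domain labels. `features` stands in for embeddings from any
# pre-trained encoder.
rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 256))          # placeholder embeddings
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)
pseudo_domains = kmeans.labels_                  # one pseudo-domain id per point

# Step (b): condition the classifier on the pseudo-domain, here by simply
# appending a one-hot pseudo-domain code to each input's features.
onehot = np.eye(5)[pseudo_domains]
augmented = np.concatenate([features, onehot], axis=1)
# ... train any classifier on (augmented, class_labels); at test time the
# pseudo-domain of a new point is the id of its nearest cluster centroid:
test_feat = rng.normal(size=(10, 256))
test_domains = kmeans.predict(test_feat)
print(test_domains)
```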
Cooperative bandit problems are increasingly motivated by their applications in large-scale decision-making. However, most research on this problem focuses on settings with perfect communication, whereas in most real-world distributed settings, communication happens over stochastic networks, with arbitrary corruptions and delays. In this paper, we study cooperative bandit learning under three typical real-world communication scenarios, namely (a) message-passing over stochastic time-varying networks, (b) instantaneous reward-sharing over a network with random delays, and (c) message-passing with adversarially corrupted rewards, including byzantine communication. For each of these environments, we propose decentralized algorithms that achieve competitive performance, along with near-optimal guarantees on the incurred group regret. Furthermore, in the setting with perfect communication, we present an improved delayed-update algorithm that outperforms the existing state-of-the-art on various network topologies. Finally, we present tight network-dependent minimax lower bounds on the group regret. Our proposed algorithms are straightforward to implement and obtain competitive empirical performance.
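As a toy illustration of decentralized bandit learning under imperfect communication (not the paper's algorithms), the sketch below runs two UCB1 agents that share observations over a link that drops each message independently with some probability.

```python
import numpy as np

class GossipUCBAgent:
    """A UCB1 agent that also folds in observations relayed by neighbors."""

    def __init__(self, n_arms):
        self.counts = np.zeros(n_arms)
        self.sums = np.zeros(n_arms)
        self.t = 0

    def choose(self):
        self.t += 1
        untried = np.flatnonzero(self.counts == 0)
        if untried.size > 0:
            return int(untried[0])
        means = self.sums / self.counts
        bonus = np.sqrt(2.0 * np.log(self.t) / self.counts)
        return int(np.argmax(means + bonus))

    def observe(self, arm, reward):
        self.counts[arm] += 1
        self.sums[arm] += reward

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.8])           # Bernoulli arm means
agents = [GossipUCBAgent(3) for _ in range(2)]
p_deliver = 0.7                                  # stochastic, lossy link
for _ in range(2000):
    for i, agent in enumerate(agents):
        arm = agent.choose()
        reward = float(rng.random() < true_means[arm])
        agent.observe(arm, reward)
        if rng.random() < p_deliver:             # message may be dropped
            agents[1 - i].observe(arm, reward)   # neighbor reuses the sample
print([int(np.argmax(a.counts)) for a in agents])  # both should favor arm 2
```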
Reduced system dependability and higher maintenance costs may be the consequence of poor electric power quality, which can disturb normal equipment performance, accelerate aging, and even cause outright failures. This study implements and tests a prototype of an Online Sequential Extreme Learning Machine (OS-ELM) classifier based on wavelets for detecting power quality problems under transient conditions. To create the classifier, the OS-ELM network model and the discrete wavelet transform (DWT) method are combined. First, DWT multi-resolution analysis (MRA) is used to extract characteristics of the distorted signal at various resolutions. The OS-ELM then sorts the retrieved data by transient duration and energy features to determine the kind of disturbance. The suggested approach requires less memory space and processing time, since it can condense a large quantity of the distorted signal's characteristics without degrading the signal's original quality. Several types of transient events were used to demonstrate the classifier's ability to detect and categorize various power disturbances, including sags, swells, momentary interruptions, oscillatory transients, harmonics, notches, spikes, flickers, and combined disturbances such as sag-swell, sag-momentary interruption, sag-harmonic, swell-transient, sag-spike, and swell-spike.
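The DWT/MRA feature-extraction step can be sketched with PyWavelets: decompose a window of the signal into detail bands plus an approximation, and summarize each band by its energy. The wavelet family, decomposition level, and the synthetic sag below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
import pywt

def dwt_energy_features(signal, wavelet="db4", level=5):
    """Decompose the signal into `level` detail bands plus an approximation
    band and return the energy of each band as a compact feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Example: a 50 Hz tone with a 50% voltage sag in the middle of the window.
fs, f0 = 3200, 50
t = np.arange(0, 0.2, 1 / fs)
clean = np.sin(2 * np.pi * f0 * t)
sagged = clean.copy()
sagged[len(t) // 3 : 2 * len(t) // 3] *= 0.5
print(dwt_energy_features(clean))
print(dwt_energy_features(sagged))   # band energies shift under the sag
# These compact feature vectors would then be fed to the OS-ELM classifier.
```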
Modern Deep Learning (DL) models have grown to sizes requiring massive clusters of specialized, high-end nodes to train. Designing such clusters to maximize both performance and utilization to amortize their steep cost is a challenging task requiring careful balance of compute, memory, and network resources. Moreover, each model's plethora of tuning knobs drastically affects performance, with optimal values often depending on the underlying cluster's characteristics, which necessitates a complex cluster-workload co-design process. To facilitate the design-space exploration of such massive DL training clusters, we introduce COMET, a holistic cluster design methodology and workflow to jointly study the impact of parallelization strategies and key cluster resource provisioning on the performance of distributed DL training. We develop a step-by-step process to establish a reusable and flexible methodology, and demonstrate its application with a case study of training a Transformer-1T model on a cluster of variable compute, memory, and network resources. Our case study demonstrates COMET's utility in identifying promising architectural optimization directions and guiding system designers in configuring key model and cluster parameters.
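In the spirit of such a co-design sweep (and only in that spirit; the cost model, the numbers, and the parallelization knobs below are placeholder assumptions, not COMET's methodology), a design-space exploration can be organized as a grid over workload and cluster parameters scored by an analytical step-time estimate.

```python
from itertools import product

# Toy joint sweep: parallelization strategy x compute x network provisioning,
# scored by a crude analytical step-time model. Every constant is a
# placeholder for illustration only.
MODEL_FLOPS = 6e21                       # assumed FLOPs per training step
configs = product(
    [(8, 1), (4, 2), (2, 4)],            # (tensor-parallel, pipeline-parallel)
    [100e12, 300e12],                    # per-GPU sustained FLOP/s
    [200e9, 800e9],                      # interconnect bandwidth, B/s
)
results = []
for (tp, pp), flops, bw in configs:
    gpus = tp * pp * 64                  # assume 64 data-parallel replicas
    compute_t = MODEL_FLOPS / (gpus * flops)
    comm_t = 2e12 / (bw * tp)            # crude all-reduce cost per step
    results.append((compute_t + comm_t, tp, pp, flops, bw))
for step_t, tp, pp, flops, bw in sorted(results)[:3]:
    print(f"step={step_t:.1f}s  tp={tp} pp={pp} flops={flops:.0e} bw={bw:.0e}")
```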
Chromosome analysis is essential for diagnosing genetic disorders. For hematologic malignancies, identification of somatic clonal aberrations by karyotype analysis remains the standard of care. However, karyotyping is costly and time-consuming because of the largely manual process and the expertise required in identifying and annotating aberrations. Efforts to date to automate karyotype analysis have fallen short in aberration detection. Using a training set of ~10k patient specimens and ~50k karyograms collected over five years at the Fred Hutchinson Cancer Center, we created a labeled set of images representing individual chromosomes. These individual chromosomes were used to train and assess deep learning models for classifying the 24 human chromosomes and identifying chromosomal aberrations. The top-accuracy models utilized the recently introduced Topological Vision Transformers (TopViTs) with 2-level-block-Toeplitz masking to incorporate structural inductive bias. TopViT outperformed CNN (Inception) models with >99.3% accuracy for chromosome identification, and exhibited >99% accuracy for detecting most aberrations. Notably, we were able to show high-quality performance even in "few-shot" learning scenarios. Incorporating the definition of clonality substantially improved both precision and recall (sensitivity). When applied to "zero-shot" scenarios, the model captured aberrations without training, with perfect precision at >50% recall. Together, these results show that modern deep learning models can approach expert-level performance for chromosome aberration detection. To our knowledge, this is the first study demonstrating the downstream effectiveness of TopViTs. These results open up exciting opportunities for not only expediting patient results but also providing a scalable technology for early screening of low-abundance chromosomal lesions.
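The "2-level-block-Toeplitz masking" mentioned above suggests an attention bias that depends only on the relative 2D offset between image patches. The sketch below builds such a relative-position bias table for a patch grid; the exact construction used by TopViTs is an assumption here, not taken from the paper.

```python
import torch

def block_toeplitz_bias(h, w, num_heads):
    """Build a learnable attention bias that is 2-level block-Toeplitz:
    bias[i, j] depends only on (row_i - row_j, col_i - col_j), encoding a
    translation-invariant structural prior over the h x w patch grid."""
    table = torch.nn.Parameter(torch.zeros(num_heads, 2 * h - 1, 2 * w - 1))
    rows = torch.arange(h).repeat_interleave(w)   # row index of each patch
    cols = torch.arange(w).repeat(h)              # col index of each patch
    dr = rows[:, None] - rows[None, :] + (h - 1)  # shifted relative rows
    dc = cols[:, None] - cols[None, :] + (w - 1)  # shifted relative cols
    return table, table[:, dr, dc]                # (heads, h*w, h*w) bias

table, bias = block_toeplitz_bias(h=14, w=14, num_heads=8)
print(bias.shape)  # torch.Size([8, 196, 196]); added to attention logits
```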
Human activity recognition (HAR) using IMU sensors, namely accelerometers and gyroscopes, has several applications in smart homes, healthcare, and human-machine interface systems. In practice, an IMU-based HAR system is expected to encounter variations in measurement due to sensor degradation, alien environments, or sensor noise, and will be subjected to unknown activities. In view of the practical deployment of such solutions, statistical confidence over the activity class scores is an important metric. In this paper, we therefore propose XAI-BayesHAR, an integrated Bayesian framework that improves the overall activity classification accuracy of IMU-based HAR solutions by recursively tracking the feature embedding vector and its associated uncertainty via a Kalman filter. Additionally, XAI-BayesHAR acts as an out-of-distribution (OOD) detector using the predictive uncertainty, which helps to evaluate and detect alien input data distributions. Furthermore, a Shapley value-based analysis of the proposed framework is performed to understand the importance of the feature embedding vector, which is accordingly used for model compression.
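The recursive-tracking idea can be sketched with a scalar-covariance Kalman filter over the embedding vector, assuming a random-walk state and identity observation model; the noise magnitudes and the isotropic-covariance simplification are illustrative assumptions, not the paper's exact filter.

```python
import numpy as np

class EmbeddingKalmanTracker:
    """Track a feature-embedding vector and its uncertainty with a Kalman
    filter whose covariance is a single scalar times the identity."""

    def __init__(self, dim, q=1e-3, r=1e-1):
        self.mean = np.zeros(dim)   # tracked embedding estimate
        self.var = 1.0              # isotropic covariance (scalar * I)
        self.q, self.r = q, r       # process / measurement noise (assumed)

    def update(self, z):
        self.var += self.q                   # predict: random-walk drift
        k = self.var / (self.var + self.r)   # Kalman gain
        self.mean += k * (z - self.mean)     # correct toward new embedding
        self.var *= (1.0 - k)
        return self.mean, self.var

rng = np.random.default_rng(0)
tracker = EmbeddingKalmanTracker(dim=32)
for _ in range(50):
    z = 0.5 + 0.1 * rng.normal(size=32)      # noisy embeddings of one class
    mean, var = tracker.update(z)
print(round(var, 4))  # low residual variance; a large value would flag OOD
```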
Opinion summarization is the task of creating summaries that capture the popular opinions in user reviews. In this paper, we introduce Geodesic Summarizer (GeoSumm), a novel system that performs unsupervised extractive opinion summarization. GeoSumm involves an encoder-decoder based representation model that represents text as a distribution over latent semantic units. GeoSumm generates these representations by performing dictionary learning over pre-trained text representations from multiple decoder layers. We then use these representations to quantify the relevance of review sentences using a novel geodesic distance-based scoring mechanism. We use the relevance scores to identify popular opinions in order to compose general and aspect-specific summaries. Our proposed model, GeoSumm, achieves state-of-the-art performance on three opinion summarization datasets. We perform additional experiments to analyze the functioning of our model and showcase its generalization ability across different domains.
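The scoring idea can be sketched as follows: each sentence embedding is expressed as a distribution over dictionary atoms, and a sentence is "popular" when its distribution lies close to the corpus centroid. The softmax projection and the symmetric KL divergence below stand in for the paper's dictionary learning and geodesic-distance details, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)
dictionary = rng.normal(size=(50, 256))     # 50 latent semantic units (assumed)
sentences = rng.normal(size=(200, 256))     # placeholder sentence embeddings

def to_distribution(x):
    """Project embeddings onto dictionary atoms, softmax to a distribution."""
    logits = x @ dictionary.T
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sym_kl(p, q, eps=1e-9):
    """Symmetric KL divergence between distributions (distance stand-in)."""
    return np.sum((p - q) * (np.log(p + eps) - np.log(q + eps)), axis=-1)

dists = to_distribution(sentences)          # (200, 50) distributions
centroid = dists.mean(axis=0)
relevance = -sym_kl(dists, centroid)        # closer to centroid = more popular
summary_ids = np.argsort(relevance)[-5:]    # top-5 extractive summary
print(summary_ids)
```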
FP8 is a natural progression beyond 16-bit formats for accelerating deep learning training and inference. In this paper, we propose an 8-bit floating point (FP8) binary interchange format consisting of two encodings: E4M3 (4-bit exponent and 3-bit mantissa) and E5M2 (5-bit exponent and 2-bit mantissa). While E5M2 follows IEEE 754 conventions for the representation of special values, E4M3's dynamic range is extended by not representing infinities and using only one mantissa bit-pattern for NaNs. We demonstrate the efficacy of the FP8 format on a variety of image and language tasks, effectively matching the quality achieved by 16-bit training sessions. Our study covers the main modern neural network architectures - CNNs, RNNs, and Transformer-based models - leaving all hyperparameters unchanged from the 16-bit baseline training sessions. Our training experiments include large language models of up to 175B parameters. We also examine FP8 post-training quantization of language models trained using 16-bit formats that resisted fixed-point INT8 quantization.
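The two encodings can be made concrete with a small decoder sketch. The special-value handling follows the description above (IEEE-style infinities and NaNs for E5M2; a single NaN mantissa pattern and no infinities for E4M3), and the printed values are each format's maximum normal.

```python
def decode_fp8(bits: int, fmt: str = "e4m3") -> float:
    """Decode an 8-bit pattern under the two proposed FP8 encodings.
    E5M2 keeps IEEE-754 specials; E4M3 gives up infinities and all but
    one mantissa NaN pattern to extend its dynamic range."""
    exp_bits, man_bits, bias = (4, 3, 7) if fmt == "e4m3" else (5, 2, 15)
    s = bits >> 7
    e = (bits >> man_bits) & ((1 << exp_bits) - 1)
    m = bits & ((1 << man_bits) - 1)
    sign = -1.0 if s else 1.0
    if e == (1 << exp_bits) - 1:                  # all-ones exponent field
        if fmt == "e5m2":
            return float("nan") if m else sign * float("inf")
        if m == (1 << man_bits) - 1:              # E4M3: only S.1111.111
            return float("nan")                   # is NaN; the rest are normal
    if e == 0:                                    # subnormal range
        return sign * 2.0 ** (1 - bias) * (m / 2.0 ** man_bits)
    return sign * 2.0 ** (e - bias) * (1 + m / 2.0 ** man_bits)

print(decode_fp8(0b0_1111_110, "e4m3"))   # 448.0, E4M3 maximum normal
print(decode_fp8(0b0_11110_11, "e5m2"))   # 57344.0, E5M2 maximum normal
```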